One way to make the knowledge stored in an artificial neural network more intelligible is to extract symbolic rules. However, producing rules from Multilayer Perceptrons (MLPs) is an NP-hard problem. Many techniques have been introduced to generate rules from single neural networks, but very few have been proposed for ensembles. Moreover, experiments have rarely been assessed by 10-fold cross-validation trials. In this work, based on the Discretized Interpretable Multilayer Perceptron (DIMLP), experiments were performed on 10 repetitions of stratified 10-fold cross-validation trials over 25 binary classification problems. The DIMLP architecture allowed us to produce rules from DIMLP ensembles, boosted shallow trees (BSTs), and Support Vector Machines (SVMs). The complexity of rulesets was measured by the average number of generated rules and the average number of antecedents per rule. Across the 25 classification problems, the most complex rulesets were generated from BSTs trained by "gentle boosting" and "real boosting." Moreover, we clearly observed that the less complex the rules were, the better their fidelity was. In fact, rules generated from decision stumps trained by modest boosting were, for almost all the 25 datasets, the simplest and had the highest fidelity. Finally, in terms of average predictive accuracy and average ruleset complexity, our results proved competitive with those reported in the literature.
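The evaluation protocol mentioned above (10 repetitions of stratified 10-fold cross-validation) can be sketched as follows. This is a minimal illustration using only the standard library; the function names and the round-robin stratification strategy are our own choices, not the paper's implementation.

```python
import random
from collections import defaultdict

def stratified_kfold_indices(labels, k=10, seed=0):
    """Partition sample indices into k folds, preserving class proportions.

    Indices of each class are shuffled, then dealt round-robin across folds,
    so every fold receives (almost) the same number of samples per class.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

def repeated_stratified_cv(labels, repeats=10, k=10):
    """Yield (train, test) index lists: `repeats` runs of stratified k-fold CV.

    Each repetition reshuffles with a different seed, giving repeats * k
    train/test splits in total (100 for the 10 x 10 protocol).
    """
    for r in range(repeats):
        folds = stratified_kfold_indices(labels, k=k, seed=r)
        for t in range(k):
            test = folds[t]
            train = [i for f in range(k) if f != t for i in folds[f]]
            yield train, test
```

Each of the 25 datasets would be run through all 100 splits, with accuracy, fidelity, and ruleset complexity averaged over them.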